Last minute additions...
After getting the documents into shape for v1.5, I made several bug fixes and
additions to make v1.6. Rather than fix up the full document file (which will
be done before the next release), this addendum file is provided to give
information about the changes. The new features are:
Depth mapped lights
Depth rendering (to save Z-Buffer information)
Displacement surfaces
Raw triangle vertex output
Directional light sources
Global haze
UV mapping and bounds
Hicolor display (VESA only, may not work right for you)
Texture maps and indexed textures
Summed textures
UV triangles
Static variables
I) Depth mapped lights and depth rendering
Depth mapped lights are very similar to spotlights, in the sense that they
point from one location at another. The primary use for these is shadowing
in scan converted scenes, where shadow information might not be obtainable
from the raytracer (see displacement surfaces). The format of their
declaration is:
depthmapped_light {
[ angle fexper ]
[ aspect fexper ]
[ at vexper ]
[ color expression ]
[ depth "depthfile.tga" ]
[ from vexper ]
[ hither fexper ]
[ up vexper ]
}
You may notice that the format of the declaration is very similar to the
viewpoint declaration. This is intentional, as you will usually generate
the depth information for "depthfile.tga" as the output of a run of Polyray.
To support output of depth information, a new statement was added to the
viewpoint declaration. The declaration to output a depth file would have the
form:
viewpoint {
from [ location of depth mapped light ]
at [ location the light is pointed at ]
...
image_format 1
}
Where the final statement tells Polyray to output depth information instead of
color information. Note that if the value in the image_format statement is
0, then normal rendering will occur. For an example of using a depth mapped
light, see the file "room1.pi" in the data archives.
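For instance, a minimal two-pass sketch might look like the following. The
light position and angle are only placeholders, and the depth render from
pass one is assumed to be saved as "depthfile.tga":

   // Pass 1: render from the light's point of view, writing depth values
   viewpoint {
      from <10, 10, -10>     // location of the depth mapped light
      at <0, 0, 0>           // location the light is pointed at
      angle 30
      image_format 1         // depth output instead of color
   }

   // Pass 2: in the real scene, use the depth file for shadow tests
   depthmapped_light {
      from <10, 10, -10>
      at <0, 0, 0>
      angle 30
      color <1, 1, 1>
      depth "depthfile.tga"
   }

Keeping from, at, and angle identical in both passes is what makes the stored
depth values line up with the light.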
II) Displacement Surfaces
Displacement surfaces cause modification of the shape of an object as it
is being rendered. The amount and direction of the displacement are specified
by an object modifier statement:
displace vexper
Where the expression is a vector that tells Polyray how to do the displacement.
This feature only works for scan converted images. The raytracer will only
see the undistorted surface. For some examples of displacement surfaces, see
the following files in the data archives:
disp2.pi, disp3.pi, legen.pi, spikes.pi
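As an additional hedged sketch (not one of the archive files), the following
ripples a sphere up and down during scan conversion. The sin function and the
runtime variable x are assumed from Polyray's expression language, and the
constants are arbitrary:

   object {
      sphere <0, 0, 0>, 2
      // displacement amount varies with x; only visible when scan converted
      displace <0, 0.25 * sin(6 * x), 0>
   }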
III) Raw triangle vertex information
A somewhat odd addition to the image output formats for Polyray is the
generation of raw triangle information. What happens is very similar to the
scan conversion process, but rather than draw polygons, Polyray will write
a text description of the polygons (after splitting them into triangles). The
final output is a (usually long) list of lines, each line describing a single
smooth triangle. The format of the output is:
x1 y1 z1 x2 y2 z2 x3 y3 z3 nx1 ny1 nz1 nx2 ny2 nz2 nx3 ny3 nz3 u1 v1 u2 v2 u3 v3
The locations of the three vertices come first, followed by the normal
information for each vertex. Lastly, the uv values for each vertex are
generated based on the surface you are rendering (see UV triangles below).
Currently I don't have any applications for this output. The intent of this
feature is to provide a way to build models in polygon form for conversion to
another renderer's input format.
For example, to produce raw triangle output describing a sphere, and dump it
to a file you could use the command:
polyray sphere.pi -p z > sphere.tri
IV) Directional lights
A directional light means just that: light coming from some direction. The
biggest difference between this light source and the others is that no
shadowing is performed. This has pretty serious implications for shading, so
if you use this type of light, you should also set the global shading flags
so that surfaces are one-sided, e.g. polyray foo.pi -q 55. The format of the
expression is:
directional_light color, direction
directional_light direction
An example would be: directional_light <2, 3, -4>, giving a white light coming
from the right, above, and behind the origin.
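As a sketch (the colors and directions here are only illustrative), a scene
might combine the white key light described above with a dim colored fill,
using both forms of the statement:

   // white key light, direction-only form
   directional_light <2, 3, -4>
   // dim bluish fill from overhead, color-and-direction form
   directional_light <0.3, 0.3, 0.5>, <0, 1, 0>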
V) Global Haze
The global haze is a color that is added based on how far the ray traveled
before hitting the surface. The format of the expression is:
haze coeff, starting_distance, color
The color you use should almost always be the same as the background color.
The only time it would be different is if you are trying to put haze into a
valley, with a clear sky above (this is a tough trick, but looks nice). An
example would be:
haze 0.8, 3, midnight_blue
The value of the coeff ranges from 0 to 1, with values closer to 0 causing
the haze to thicken, and values closer to 1 causing the haze to thin out.
I know it seems backwards, but it is working and I don't want to break anything.
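Since the haze color should normally match the background, a typical fragment
might pair the two. This sketch assumes a background statement and a
predefined midnight_blue color (e.g. from a standard color include file):

   // distant surfaces fade toward the same color as the background
   background midnight_blue
   haze 0.8, 3, midnight_blue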
VI) UV mapping and bounds
In addition to the runtime variables x, y, P, etc., the variables u and v
have been added. In general, u varies from 0 to 1 as you go around an object
and v varies from 0 to 1 as you go from the bottom to the top of an object.
Not all primitives set meaningful values for u and v; those that do are:
bezier, cone, cylinder, disc, sphere, torus, patch
These variables can be used in a couple of ways: to tell Polyray to render
only portions of a surface within certain uv bounds, or as arguments to
expressions in textures or displacement functions.
See the file uvtst.pi in the data archives for an example of using uv bounds
on objects. The file spikes.pi demonstrates using uv as variables in a
displacement surface. The file bezier1.pi demonstrates using uv as variables
to stretch an image over the surface of a bezier patch.
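In the spirit of spikes.pi (this sketch is not taken from that file, and the
sin function and constants are assumptions), u and v can drive a displacement
directly:

   object {
      sphere <0, 0, 0>, 2
      // displacement amount follows the surface parameterization (u, v)
      displace <0, 0.3 * sin(8 * 3.14159 * u) * sin(4 * 3.14159 * v), 0>
   }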
VII) Hicolor display output
Polyray now supports the VESA 640x480 hicolor graphics mode for display preview.
The command line switch is "-V 2". In polyray.ini, you would use
"display hicolor". Note that using any of the status display options can
really screw up the picture. I recommend "-t 0" if you are going to use
this option.
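Putting those two switches together, an invocation might look like this (the
scene file name is just a placeholder):

   polyray foo.pi -V 2 -t 0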
Future versions will work better. I just got a board that could handle hicolor,
so I'm still experimenting.
VIII) Texture maps and indexed textures
A texture map is declared in a manner similar to color maps. There is a
list of value pairs and texture pairs, for example:
define index_tex_map
texture_map([-2, 0, red_blue_check, bumpy_green],
[0, 2, bumpy_green, reflective_blue])
Note that for texture maps there is a required comma separating each of the
entries.
These texture maps are complementary to the indexed texture. Two typical
uses of indexed textures are to use solid texturing functions to select
(and optionally blend) between complete textures rather than just colors, and
to use image maps as a way to map textures to a surface.
For example, using the texture map above on a sphere can be accomplished
with the following:
object {
sphere <0, 0, 0>, 2
texture { indexed x, index_tex_map }
}
The indexed texture uses a lookup function (in this case a simple gradient
along the x axis) to select from the texture map that follows. See the
data file "indexed1.pi" for the complete example.
As an example of using an image map to place textures on a surface, the
following example uses several textures, selected by the color values in
an image map. The function "indexed_map" returns the color index value from
a color mapped Targa image (or uses the red channel in a raw Targa).
object {
sphere <0, 0, 0>, 1
texture {
indexed indexed_map(image("txmap.tga"), <x, 0, y>, 1),
texture_map([1, 1, mirror, mirror],
[2, 2, bright_pink, bright_pink],
[3, 3, Jade, Jade])
translate <-0.5, -0.5, 0> // center image